Wrong Question


Out-of-Distribution Detection Methods Answer the Wrong Questions

Li, Yucen Lily, Lu, Daohan, Kirichenko, Polina, Qiu, Shikai, Rudner, Tim G. J., Bruss, C. Bayan, Wilson, Andrew Gordon

arXiv.org Machine Learning

To detect distribution shifts and improve model safety, many out-of-distribution (OOD) detection methods rely on the predictive uncertainty or features of supervised models trained on in-distribution data. In this paper, we critically re-examine this popular family of OOD detection procedures and argue that these methods are fundamentally answering the wrong questions for OOD detection. There is no simple fix to this misalignment, since a classifier trained only on in-distribution classes cannot be expected to identify OOD points; for instance, a cat-dog classifier may confidently misclassify an airplane if it contains features that distinguish cats from dogs, even though an airplane looks nothing like either class. We find that uncertainty-based methods incorrectly conflate high uncertainty with being OOD, while feature-based methods incorrectly conflate large feature-space distances with being OOD. We show how these pathologies manifest as irreducible errors in OOD detection and identify common settings where these methods are ineffective. Interventions to improve OOD detection, such as feature-logit hybrid methods, scaling of model and data size, epistemic uncertainty representation, and outlier exposure, also fail to address this fundamental misalignment in objectives. Finally, we consider unsupervised density estimation and generative models for OOD detection, which we show have their own fundamental limitations.
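The conflation the abstract describes can be made concrete with a toy sketch (my illustration, not code from the paper): a maximum-softmax-probability (MSP) detector treats low predictive confidence as evidence that an input is OOD, so an OOD input that happens to contain class-discriminative features can receive a confident prediction and evade detection. The logit values below are hypothetical.

```python
import numpy as np

def softmax(logits):
    """Numerically stable softmax over a 1-D logit vector."""
    z = logits - logits.max()
    e = np.exp(z)
    return e / e.sum()

def msp_ood_score(logits):
    """MSP heuristic: higher score = 'more OOD' (lower max class probability)."""
    return 1.0 - softmax(logits).max()

# A cat-vs-dog classifier's logits on three inputs (toy numbers):
cat_logits       = np.array([4.0, 0.5])  # in-distribution cat: confident
ambiguous_logits = np.array([1.0, 1.1])  # ambiguous in-distribution image
airplane_logits  = np.array([3.8, 0.4])  # OOD airplane with cat-like features

for name, logits in [("cat", cat_logits),
                     ("ambiguous in-dist", ambiguous_logits),
                     ("airplane (OOD)", airplane_logits)]:
    print(f"{name:>18}: MSP OOD score = {msp_ood_score(logits):.3f}")
```

Under this heuristic the OOD airplane scores lower (looks "less OOD") than an ambiguous in-distribution image, which is exactly the pathology the paper identifies: the detector measures uncertainty about the in-distribution classes, not membership in the in-distribution itself.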


Against 'softmaxing' culture

Mwesigwa, Daniel

arXiv.org Artificial Intelligence

AI is flattening culture. Evaluations of "culture" are showing the myriad ways in which large AI models are homogenizing language and culture, averaging out rich linguistic differences into generic expressions. I call this phenomenon "softmaxing culture," and it is one of the fundamental challenges facing AI evaluations today. Efforts to improve and strengthen evaluations of culture are central to the project of cultural alignment in large AI systems. This position paper argues that machine learning (ML) and human-computer interaction (HCI) approaches to evaluation are limited. I propose two key conceptual shifts. First, instead of asking "what is culture?" at the start of system evaluations, I propose beginning with the question: "when is culture?" Second, while I acknowledge the philosophical claim that cultural universals exist, the challenge is not simply to describe them, but to situate them in relation to their particulars. Taken together, these conceptual shifts invite evaluation approaches that move beyond technical requirements toward perspectives that are more responsive to the complexities of culture.
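The term "softmaxing" alludes to the softmax function's tendency to concentrate probability mass on the most frequent outcome. A toy gloss of the metaphor (my illustration, not the author's formalism, with hypothetical logits): when generation favors high-probability outputs, the most common, most generic expression crowds out rarer, more distinctive variants.

```python
import numpy as np

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax; lower temperature sharpens the distribution."""
    z = np.asarray(logits, dtype=float) / temperature
    z -= z.max()
    e = np.exp(z)
    return e / e.sum()

# Hypothetical logits for four ways of phrasing the same greeting, with the
# generic phrasing slightly more frequent in training data:
phrases = ["Hello!", "Howdy!", "Wagwan!", "Oli otya?"]
logits = [2.0, 1.0, 0.5, 0.3]

for t in (1.0, 0.3):
    p = softmax(logits, temperature=t)
    print(f"T={t}: " + ", ".join(f"{w} {pi:.2f}" for w, pi in zip(phrases, p)))
```

At temperature 1.0 the rarer phrasings retain meaningful probability; at 0.3 nearly all mass collapses onto the generic "Hello!", a small-scale analogue of linguistic variety being averaged into a single dominant expression.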


'Is This AI Sapient?' Is The Wrong Question To Ask About LaMDA - AI Summary

#artificialintelligence

And so the risk here is not that the AI is truly sentient but that we are well-poised to create sophisticated machines that can imitate humans to such a degree that we cannot help but anthropomorphize them--and that large tech companies can exploit this in deeply unethical ways. As should be clear from the way we treat our pets, or how we've interacted with Tamagotchi, or how video gamers reload a save if they accidentally make an NPC cry, we are actually very capable of empathizing with the nonhuman. Systems engineer and historian Lilly Ryan warns that what she calls ecto-metadata--the metadata you leave behind online that illustrates how you think--is vulnerable to exploitation in the near future. In her section of the work, Suzanne Kite draws on Lakota ontologies to argue that it is essential to recognize that sapience does not define the boundaries of who (or what) is a "being" worthy of respect. This is the AI ethical dilemma that stands before us: the need to make kin of our machines weighed against the myriad ways this can and will be weaponized against us in the next phase of surveillance capitalism.


'Is This AI Sapient?' Is the Wrong Question to Ask About LaMDA

#artificialintelligence

The uproar caused by Blake Lemoine, a Google engineer who believes that one of the company's most sophisticated chat programs, LaMDA (or Language Model for Dialogue Applications) is sapient, has had a curious element: actual AI ethics experts all but renouncing further discussion of the AI sapience question, or deeming it a distraction. They're right to do so. In reading the edited transcript Lemoine released, it was abundantly clear that LaMDA was pulling from any number of websites to generate its text; its interpretation of a Zen koan could've come from anywhere, and its fable read like an automatically generated story (though its depiction of the monster as "wearing human skin" was a delightfully HAL-9000 touch). There was no spark of consciousness there, just little magic tricks that paper over the cracks. But it's easy to see how someone might be fooled, looking at social media responses to the transcript--with even some educated people expressing amazement and a willingness to believe.


Can AI Save Humanity From Climate Change? That's the Wrong Question

#artificialintelligence

It's this indistinct picture of both what the technology is and what it can do that might engender a look of uncertainty on someone's face when asked the question, "Can AI solve climate change?" "Well," we think, "it must be able to do something," while entirely unsure of just how algorithms are meant to pull us back from the ecological brink. The question is loaded, faulty in its assumptions, and more than a little misleading. It is a vital one, however, and the basic premise of utilizing one of the most powerful tools humanity has ever built to address the most existential threat it has ever faced is one that warrants our genuine attention. Machine learning -- the subset of AI that allows for machines to learn from data without explicit programming -- and climate change advocacy and action are relatively new bedfellows.


"Will china dominate AI?" – is the wrong question

#artificialintelligence

When I spoke at the UK China business forum last month, I discussed this topic in response to an audience question. In the current climate of nationalistic fervour, I see the same question asked in many guises. As a disclosure, these are personal views. Like many in Europe, I take the view of building bridges and of collaboration. I have worked with Chinese AI companies (in robotics) and have also enjoyed working with excellent Chinese students whom I have mentored.


Should You Build Or Buy AI? That's The Wrong Question

#artificialintelligence

After all, what we're talking about here is becoming AI-capable. Notice that I don't say "AI-centric." While many executives might see the appeal in becoming more "AI-centric" in order to showcase the innovative nature of their brand, I'd encourage these individuals to take a step back to ensure their priorities are in order. Technology is not typically an end goal in itself. Given the scarcity of talent in the AI space, enterprises' competitive differentiators likely won't come from how big a technology infrastructure they build, but rather how focused and aligned their efforts are in leveraging AI's capabilities toward business objectives and serving the customer.


In Real Life: How Will AI Impact Workplace Learning?

#artificialintelligence

Will AI impact workplace learning? That, at least, is the wrong question to be asking right now. Well, remember when offices had typists? I don't, but I've seen it on TV so it must have been real. Imagine if L&D had asked, "How will the desktop computer help us better train our typists?"



Will AI Achieve Consciousness? Wrong Question

WIRED

When Norbert Wiener, the father of cybernetics, wrote his book The Human Use of Human Beings in 1950, vacuum tubes were still the primary electronic building blocks, and there were only a few actual computers in operation. But he imagined the future we now contend with in impressive detail and with few clear mistakes. More than any other early philosopher of artificial intelligence, he recognized that AI would not just imitate--and replace--human beings in many intelligent activities but change human beings in the process. "We are but whirlpools in a river of ever-flowing water," he wrote. "We are not stuff that abides, but patterns that perpetuate themselves."